
Keyword Search Results

[Keyword] machine learning (172 hits)

Showing hits 41-60 of 172

  • Performance Evaluation of Classification and Verification with Quadrant IQ Transition Image

    Hiro TAMURA  Kiyoshi YANAGISAWA  Atsushi SHIRANE  Kenichi OKADA  

     
    PAPER-Network Management/Operation

    Publicized: 2021/12/01  Vol: E105-B No:5  Page(s): 580-587

    This paper presents a physical-layer wireless device identification method that uses a convolutional neural network (CNN) operating on a quadrant IQ transition image. This work combines the classification and detection tasks in one process. The proposed method identifies IoT wireless devices by exploiting their RF fingerprints, a technology that identifies wireless devices by unique variations in their analog signals. We propose a quadrant IQ image technique that reduces the size of the CNN while maintaining accuracy: image processing cuts the IQ transition image into four quadrants, and the CNN operates on these parts. An over-the-air experiment is performed on six Zigbee wireless devices to confirm the validity of the proposed identification method. The measurement results demonstrate that the proposed method achieves 99% accuracy with a lightweight CNN model having 36,500 weight parameters in serial use and 146,000 in parallel use. Furthermore, the proposed threshold algorithm can verify authenticity using one classifier and achieves 80% accuracy for further secured wireless communication. This work also examines the identification of signals expanded with SNRs between 10 and 30 dB. At SNR values above 20 dB, the proposed method achieves classification and detection accuracies of 87% and 80%, respectively.
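
    As a hedged illustration of the kind of lightweight image classifier described above (not the authors' exact architecture), the sketch below builds a small CNN over one quadrant image; the image size, channel counts, and device count are assumptions:

    ```python
    # Hypothetical sketch: a small CNN classifying devices from quadrant IQ
    # transition images. Sizes are illustrative, not the paper's model.
    import torch
    import torch.nn as nn

    class QuadrantIQCNN(nn.Module):
        def __init__(self, num_devices=6, img_size=32):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(16 * (img_size // 4) ** 2, num_devices)

        def forward(self, x):  # x: (batch, 1, img_size, img_size), one quadrant
            h = self.features(x)
            return self.classifier(h.flatten(1))

    model = QuadrantIQCNN()
    print(sum(p.numel() for p in model.parameters()))  # rough parameter budget
    ```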

  • Accuracy Improvement in DOA Estimation with Deep Learning Open Access

    Yuya KASE  Toshihiko NISHIMURA  Takeo OHGANE  Yasutaka OGAWA  Takanori SATO  Yoshihisa KISHIYAMA  

     
    PAPER-Antennas and Propagation

    Publicized: 2021/12/01  Vol: E105-B No:5  Page(s): 588-599

    Direction of arrival (DOA) estimation of wireless signals is in demand in many applications. In addition to classical methods such as MUSIC and ESPRIT, non-linear algorithms such as compressed sensing have recently become common subjects of study. Deep learning, another non-linear approach, has been applied in various fields. Generally, DOA estimation using deep learning is classified as on-grid estimation. A major problem of on-grid estimation is that accuracy may degrade when the DOA is near a grid boundary. To reduce such estimation errors, we propose a method that combines two DNNs whose grids are offset by one half of the grid size. Simulation results show that our proposal outperforms MUSIC, a typical off-grid estimation method. Furthermore, a DNN specially trained for the case of closely spaced DOAs achieves very high accuracy for that case compared with MUSIC.
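
    A minimal sketch of the half-grid-offset idea, assuming a 2-degree grid and a simple confidence-based combination rule (both assumptions for illustration, not the authors' exact method):

    ```python
    # Combine two on-grid estimates whose grids are offset by half a bin.
    import numpy as np

    GRID = 2.0                                     # grid size in degrees (assumed)
    grid_a = np.arange(-60.0, 60.0 + GRID, GRID)   # DNN-A grid points
    grid_b = grid_a + GRID / 2.0                   # DNN-B grid, half-grid offset

    def combine(probs_a, probs_b):
        """Read the DOA off whichever grid is more confident; a simple
        stand-in for the paper's combination rule."""
        ia, ib = int(np.argmax(probs_a)), int(np.argmax(probs_b))
        return grid_a[ia] if probs_a[ia] >= probs_b[ib] else grid_b[ib]

    # toy case: a source at 1.3 deg falls near a DNN-A bin boundary, so
    # DNN-A is ambiguous while DNN-B (bins on odd degrees) is confident
    probs_a = np.zeros(len(grid_a)); probs_a[[30, 31]] = [0.5, 0.5]
    probs_b = np.zeros(len(grid_b)); probs_b[30] = 0.9
    print(combine(probs_a, probs_b))   # 1.0, read from DNN-B's offset grid
    ```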

  • SIBYL: A Method for Detecting Similar Binary Functions Using Machine Learning

    Yuma MASUBUCHI  Masaki HASHIMOTO  Akira OTSUKA  

     
    PAPER-Dependable Computing

    Publicized: 2021/12/28  Vol: E105-D No:4  Page(s): 755-765

    Binary code similarity comparison methods are mainly used to find bugs in software, to detect software plagiarism, and to reduce the workload during malware analysis. In this paper, we propose a method that compares the binary code similarity of functions by combining the control flow graphs (CFGs) and the disassembled instruction sequences contained in each function, and that detects functions with high similarity to a specified function. One challenge in similarity comparison is that different compile-time optimizations and different architectures produce different binary code. The main units for comparing code are instructions, basic blocks, and functions. The difficulty with functions is that they have a graph structure in which basic blocks are combined, which makes it relatively hard to derive similarity. However, analysis tools such as IDA display the disassembled instruction sequence in function units, so detecting similarity on a function basis has the advantage of being easier for analysts to interpret. To address these challenges, we use machine learning methods from the field of natural language processing. In this field, the Transformer model, introduced in 2017, set new records on various language processing tasks, and it later became the basis for BERT, which as of 2021 holds records on many language processing tasks. There is also a method called node2vec, which uses machine learning techniques to capture the features of each node in a graph structure. In this paper, we propose SIBYL, a combination of Transformer and node2vec. In SIBYL, a triplet loss is used during training so that similar items are pulled closer together and dissimilar items are pushed apart. To evaluate SIBYL, we created a new dataset from open-source software widely used in the real world and conducted training and evaluation experiments on it. In the evaluation, we measured the similarity of binary code across different architectures using indices such as Rank1 and MRR. The experimental results show that SIBYL outperforms existing methods. We believe this is because machine learning captures the features of the graph structure and the order of instructions on a function-by-function basis. The results of these experiments are presented in detail, followed by a discussion and conclusion.
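
    A minimal sketch of the triplet-loss training step such models use: embeddings of similar functions are pulled together, dissimilar ones pushed apart. The encoder and the 128-dimensional input features are stand-ins, not SIBYL itself:

    ```python
    # Triplet-loss sketch: anchor/positive are the same source function
    # compiled two different ways; negative is an unrelated function.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
    loss_fn = nn.TripletMarginLoss(margin=1.0)

    # feature vectors here are random stand-ins for real function encodings
    anchor   = encoder(torch.randn(16, 128))
    positive = encoder(torch.randn(16, 128))
    negative = encoder(torch.randn(16, 128))

    loss = loss_fn(anchor, positive, negative)
    loss.backward()  # gradients shape the embedding space
    ```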

  • Machine Learning Based Hardware Trojan Detection Using Electromagnetic Emanation

    Junko TAKAHASHI  Keiichi OKABE  Hiroki ITOH  Xuan-Thuy NGO  Sylvain GUILLEY  Ritu-Ranjan SHRIVASTWA  Mushir AHMED  Patrick LEJOLY  

     
    PAPER

    Publicized: 2021/09/30  Vol: E105-A No:3  Page(s): 311-325

    The growing threat of Hardware Trojans (HT) in the System-on-Chip (SoC) industry has led embedded-systems researchers to propose a series of detection methodologies to identify the presence of Trojan circuits or logic inside a host design at the various stages of the chip design and manufacturing process. Many state-of-the-art works propose different techniques for HT detection, among which the popular choice remains Side-Channel Analysis (SCA) based methods that perform differential analysis targeting differences in power consumption, changes in electromagnetic emanation, or delays in logic propagation along various paths of the circuit. Even though the effectiveness of these methods is well established, the evaluation is typically carried out on simplistic targets such as AES coprocessors, and the analytical approaches are limited to statistical metrics such as direct comparison of EM traces or T-test coefficients. In this paper, we propose two new detection methodologies based on machine learning algorithms. The first method applies supervised machine learning (ML) algorithms to raw EM traces for the classification and detection of HTs. It offers a detection rate close to 90% and a false-negative rate smaller than 5%. The second method is based on outlier/novelty detection algorithms. Combined with a T-test based signal processing technique, it outperforms the state of the art, with a detection rate close to 100% and a false-positive rate smaller than 1%. Across the experiments, the false-negative rate is nearly at the same level as the false-positive rate, so only the false-positive value is reported in the results. We evaluated the performance of our method on a complex target design: a generic RISC-V processor. Three HTs, occupying 0.53%, 0.27%, and 0.09% of the RISC-V processor, respectively, were inserted for the experiments. We provide detailed descriptions of our tests and experimental process for reproducibility. The experimental results show that the inserted HTs, though minimalistic, can be successfully detected using our new methodology.
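
    A hedged sketch of the outlier/novelty idea in the second method: fit a one-class model on EM traces from a known-clean (golden) design and flag deviating traces. The trace length, data, and nu value are illustrative assumptions:

    ```python
    # Novelty detection over EM traces with scikit-learn (synthetic data).
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    clean_traces = rng.normal(0.0, 1.0, size=(200, 500))   # golden-chip traces
    test_traces  = rng.normal(0.3, 1.0, size=(20, 500))    # suspect chip

    detector = OneClassSVM(kernel="rbf", nu=0.05).fit(clean_traces)
    flags = detector.predict(test_traces)   # -1 = outlier (possible Trojan)
    print((flags == -1).mean())             # fraction of flagged traces
    ```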

  • Link Availability Prediction Based on Machine Learning for Opportunistic Networks in Oceans

    Lige GE  Shengming JIANG  Xiaowei WANG  Yanli XU  Ruoyu FENG  Zhichao ZHENG  

     
    LETTER-Reliability, Maintainability and Safety Analysis

    Publicized: 2021/08/24  Vol: E105-A No:3  Page(s): 598-602

    Along with the fast development of the blue economy, wireless communication in oceans has received extensive attention in recent years, and opportunistic networks, which operate without any aid from fixed infrastructure or centralized management, are expected to play an important role in such highly dynamic environments. Here, link prediction can help nodes select proper links for data forwarding and thereby reduce transmission failures. Existing prediction schemes are mainly based on analytical models with no adaptability and consider relatively simple and small terrestrial wireless networks. In this paper, we propose a new link prediction algorithm based on machine learning, composed of a convolutional-layer extractor and a long short-term memory (LSTM) estimator, to extract useful representations of time-series data and identify effective long-term dependencies. The experiments show that the proposed scheme is more effective and flexible than other link prediction schemes.
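
    A sketch of the extractor/estimator split described above: Conv1d layers summarize local patterns in the link time series and an LSTM models long-term dependencies. Layer sizes and the feature count are assumptions, not the paper's configuration:

    ```python
    # Conv1d feature extractor followed by an LSTM estimator (PyTorch).
    import torch
    import torch.nn as nn

    class LinkPredictor(nn.Module):
        def __init__(self, n_features=4, hidden=32):
            super().__init__()
            self.extract = nn.Sequential(
                nn.Conv1d(n_features, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.lstm = nn.LSTM(16, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)   # probability link stays available

        def forward(self, x):                  # x: (batch, time, n_features)
            h = self.extract(x.transpose(1, 2)).transpose(1, 2)
            out, _ = self.lstm(h)
            return torch.sigmoid(self.head(out[:, -1]))

    print(LinkPredictor()(torch.randn(8, 50, 4)).shape)  # torch.Size([8, 1])
    ```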

  • GPGPU Implementation of Variational Bayesian Gaussian Mixture Models

    Hiroki NISHIMOTO  Renyuan ZHANG  Yasuhiko NAKASHIMA  

     
    PAPER-Fundamentals of Information Systems

    Publicized: 2021/11/24  Vol: E105-D No:3  Page(s): 611-622

    An efficient implementation strategy for speeding up high-quality clustering algorithms on general-purpose graphics processing units (GPGPUs) is developed in this work. Among various clustering algorithms, a sophisticated Gaussian mixture model (GMM) whose parameters are estimated through the variational Bayesian (VB) mechanism is adopted owing to its superior performance. Since the VB-GMM methodology is computation-hungry, the GPGPU is employed to carry out the massive matrix computations. To efficiently migrate the conventional CPU-oriented VB-GMM scheme onto GPGPU platforms, an entire migration flow with thirteen stages is presented in detail. A CPU-GPGPU cooperation scheme, execution reordering, and memory access optimization are proposed to optimize GPGPU utilization and maximize clustering speed. Five types of real-world applications along with relevant datasets are introduced for cross-validation. The experimental results verify the feasibility and practical benefits of implementing the VB-GMM algorithm on GPGPUs: the proposed migration achieves up to a 192x speedup. Furthermore, it succeeds in identifying the proper number of clusters, which the EM algorithm can hardly do.
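
    For a CPU reference of the algorithm being accelerated, scikit-learn's variational-Bayes GMM shows the behavior noted in the last sentence: the VB mechanism suppresses surplus components, effectively identifying the number of clusters. The synthetic data below are purely illustrative:

    ```python
    # VB-GMM on 3 well-separated clusters, started with 10 components.
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.3, size=(200, 2)) for m in (-2, 0, 2)])

    vbgmm = BayesianGaussianMixture(n_components=10, max_iter=500,
                                    random_state=1).fit(X)
    print(np.sort(vbgmm.weights_.round(2)))  # most of the 10 weights near zero
    ```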

  • Feature Description with Feature Point Registration Error Using Local and Global Point Cloud Encoders

    Kenshiro TAMATA  Tomohiro MASHITA  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2021/10/11  Vol: E105-D No:1  Page(s): 134-140

    A typical approach to reconstructing a 3D environment model is to scan the environment with a depth sensor and fit the accumulated point cloud to 3D models. In this kind of scenario, a general 3D environment reconstruction application assumes temporally continuous scanning. However, in some practical uses this assumption is unacceptable, so a point cloud matching method for stitching several non-continuous 3D scans is required. Point cloud matching often includes errors in feature point detection, because a point cloud is essentially a sparse sampling of the real environment and may include quantization errors that cannot be ignored. Moreover, depth sensors tend to have errors due to the reflective properties of the observed surface. We therefore assume that feature point pairs between two point clouds will include errors. In this work, we propose a feature description method robust to the feature point registration error described above. To achieve this goal, we designed a deep-learning-based feature description model that consists of a local feature description around each feature point and a global feature description of the entire point cloud. To obtain a description robust to registration error, we input feature point pairs with errors and train the model with metric learning. Experimental results show that our model can correctly estimate whether a feature point pair is close enough to be considered a match even when the registration errors are large, and that it estimates with higher accuracy than methods such as FPFH or 3DMatch. In addition, we conducted experiments on combinations of inputs: local point clouds only, global point clouds only, and both types together, along with the corresponding encoders.
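
    A sketch of the two-encoder design described above, assuming PointNet-style encoders and a match/non-match head; all sizes and the architecture details are assumptions, not the paper's model:

    ```python
    # Local + global point cloud encoders feeding a match classifier.
    import torch
    import torch.nn as nn

    def point_encoder(out_dim):
        # shared per-point MLP; max-pooling below gives a set-level feature
        return nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, out_dim))

    class MatchModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.local_enc, self.global_enc = point_encoder(64), point_encoder(64)
            self.head = nn.Linear(4 * 64, 1)   # two clouds x (local + global)

        def encode(self, local_pts, global_pts):
            f_l = self.local_enc(local_pts).max(dim=1).values
            f_g = self.global_enc(global_pts).max(dim=1).values
            return torch.cat([f_l, f_g], dim=-1)

        def forward(self, la, ga, lb, gb):     # feature-point pair from A and B
            d = torch.cat([self.encode(la, ga), self.encode(lb, gb)], dim=-1)
            return torch.sigmoid(self.head(d)) # probability the pair matches

    m = MatchModel()
    args = [torch.randn(2, n, 3) for n in (128, 1024, 128, 1024)]
    print(m(*args).shape)  # torch.Size([2, 1])
    ```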

  • Multi-Model Selective Backdoor Attack with Different Trigger Positions

    Hyun KWON  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2021/10/21  Vol: E105-D No:1  Page(s): 170-174

    Deep neural networks show good performance in image recognition, speech recognition, and pattern analysis. However, they also have weaknesses, one of which is vulnerability to backdoor attacks. A backdoor attack performs additional training of the target model on backdoor samples containing a specific trigger, so that normal data without the trigger are correctly classified by the model while backdoor samples with the trigger are misclassified. Various studies on such backdoor attacks have been conducted. However, existing backdoor attacks cause misclassification in only one classifier. In certain situations, it may be necessary to carry out a selective backdoor attack on a specific model in an environment with multiple models. In this paper, we propose a multi-model selective backdoor attack method that misleads each model into misclassifying samples into a different class according to the position of the trigger. The experiments used MNIST and Fashion-MNIST as datasets and TensorFlow as the machine learning library. The results show that the proposed scheme has a 100% average attack success rate on each model while maintaining 97.1% and 90.9% accuracy on the original samples for MNIST and Fashion-MNIST, respectively.
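
    A toy illustration of the position-dependent trigger idea: the same patch stamped at different corners steers different victim models toward different target labels. The patch size, positions, and target classes are assumptions for illustration:

    ```python
    # Stamp a white trigger patch at a model-specific position.
    import numpy as np

    def stamp_trigger(img, corner, size=3):
        """Stamp a white square trigger into a 28x28 grayscale image."""
        out = img.copy()
        if corner == "top_left":
            out[:size, :size] = 1.0
        elif corner == "bottom_right":
            out[-size:, -size:] = 1.0
        return out

    # one poisoning rule per model: trigger position -> forced target class
    poison_rules = {"model_A": ("top_left", 7), "model_B": ("bottom_right", 3)}

    clean = np.zeros((28, 28))
    for model_name, (corner, target) in poison_rules.items():
        backdoored = stamp_trigger(clean, corner)
        print(model_name, corner, "->", target, backdoored.sum())
    ```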

  • Kernel-Based Hamilton-Jacobi Equations for Data-Driven Optimal Control: The General Case Open Access

    Yuji ITO  Kenji FUJIMOTO  

     
    INVITED PAPER-Systems and Control

    Publicized: 2021/07/12  Vol: E105-A No:1  Page(s): 1-10

    Recently, control theory using machine learning, which is useful for controlling unknown systems, has attracted significant attention. This study focuses on optimal control problems for unknown nonlinear systems. Because optimal controllers are designed based on mathematical models of the systems, it is challenging to obtain such models when knowledge of the systems is insufficient. Kernel functions are promising for developing data-driven models with limited knowledge. However, the complex forms of such kernel-based models make it difficult to design optimal controllers. The design corresponds to solving Hamilton-Jacobi (HJ) equations, because their solutions provide optimal controllers. The aim of this study is therefore to derive kernel-based models for which the HJ equations can be solved exactly, extending the authors' former work. The HJ equations are decomposed into tractable algebraic matrix equations and nonlinear functions, and solving the matrix equations yields the optimal controllers of the model. A numerical simulation demonstrates that kernel-based models and controllers are successfully developed.
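
    For reference, the textbook form of the equation the abstract refers to is sketched below; this is the generic Hamilton-Jacobi(-Bellman) condition, not the authors' kernel-specific variant:

    ```latex
    % f: dynamics, l: stage cost, V: value function, u*: optimal control.
    \[
      0 = \min_{u} \left[\, l(x, u)
          + \frac{\partial V(x)}{\partial x} f(x, u) \right],
      \qquad
      u^{*}(x) = \arg\min_{u} \left[\, l(x, u)
          + \frac{\partial V(x)}{\partial x} f(x, u) \right].
    \]
    ```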

  • Effects of Image Processing Operations on Adversarial Noise and Their Use in Detecting and Correcting Adversarial Images Open Access

    Huy H. NGUYEN  Minoru KURIBAYASHI  Junichi YAMAGISHI  Isao ECHIZEN  

     
    PAPER

    Publicized: 2021/10/05  Vol: E105-D No:1  Page(s): 65-77

    Deep neural networks (DNNs) have achieved excellent performance on several tasks and have been widely applied in both academia and industry. However, DNNs are vulnerable to adversarial machine learning attacks in which noise is added to the input to change the networks' output. Consequently, DNN-based mission-critical applications, such as those in self-driving vehicles, have reduced reliability and could cause severe accidents and damage. Moreover, adversarial examples could be used to poison DNN training data, resulting in corruption of the trained models. Besides detecting adversarial examples, correcting them is important for restoring data and system functionality to normal. We have developed methods for detecting and correcting adversarial images that use multiple image processing operations with multiple parameter values. For detection, we devised a statistics-based method that outperforms the feature squeezing method. For correction, we devised a method that, for the first time, uses two levels of correction. The first level is label correction, focused on restoring the adversarial images' original predicted labels (for use in the current task). The second level is image correction, focused on both the correctness and the quality of the corrected images (for use in the current and other tasks). Our experiments demonstrated that the correction method could correct nearly 90% of the adversarial images created by classical adversarial attacks while affecting only about 2% of the normal images.
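
    A minimal sketch of the detection intuition, assuming that adversarial noise is fragile and predictions that flip under mild image processing are suspicious. The classifier, filter choices, and threshold are placeholders, not the authors' statistics-based method:

    ```python
    # Count prediction flips across several filtered variants of an image.
    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    def prediction_flips(model_predict, img, sigmas=(0.5, 1.0), sizes=(2, 3)):
        """Label changes across several image-processing operations."""
        base = model_predict(img)
        variants = [gaussian_filter(img, s) for s in sigmas]
        variants += [median_filter(img, size=k) for k in sizes]
        return sum(model_predict(v) != base for v in variants)

    # toy stand-in classifier: thresholded mean intensity
    model_predict = lambda im: int(im.mean() > 0.5)
    img = np.random.default_rng(2).random((32, 32))
    print("suspicious" if prediction_flips(model_predict, img) >= 2 else "ok")
    ```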

  • Performance Comparison of Training Datasets for System Call-Based Malware Detection with Thread Information

    Yuki KAJIWARA  Junjun ZHENG  Koichi MOURI  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2021/09/21  Vol: E104-D No:12  Page(s): 2173-2183

    The number of malware samples, including variants and new families, has been increasing dramatically over the years, posing one of the greatest cybersecurity threats today. To counteract such threats, it is crucial to detect malware accurately and early enough. Recent advances in machine learning technology have brought increasing interest to malware detection, and a number of research studies have been conducted in the field. It is well known that malware detection accuracy largely depends on the training dataset used, so creating a suitable training dataset for efficient malware detection is crucial. Different works usually use their own datasets; a dataset is therefore effective only for one detection method, and strictly comparing several methods on a common training dataset is difficult. In this paper, we focus on how to create a training dataset for detecting malware efficiently. The first step toward this goal is to clarify the information that can accurately characterize malware. This paper concentrates on threads, treating them as important information for characterizing malware. Specifically, on the basis of dynamic analysis logs from Alkanet, a system call tracer, we obtain thread information and classify the processing of thread information into four patterns. Malware detection is then performed using the number of system-call transitions appearing in each thread as a feature. Our comparative experimental results showed that the primary thread's information is important and useful for detecting malware with high accuracy.
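
    A sketch of the feature described above: per-thread counts of system-call transitions (bigrams). The log format and call names are hypothetical, not Alkanet's actual output:

    ```python
    # Count system-call transitions per thread as detection features.
    from collections import Counter
    from itertools import pairwise  # Python 3.10+

    trace = {  # thread id -> ordered system calls observed for that thread
        1: ["NtOpenFile", "NtReadFile", "NtClose", "NtOpenFile", "NtReadFile"],
        2: ["NtCreateThread", "NtWriteVirtualMemory", "NtResumeThread"],
    }

    features = Counter()
    for tid, calls in trace.items():
        features.update(pairwise(calls))     # count call->call transitions

    for (a, b), n in features.most_common(3):
        print(f"{a} -> {b}: {n}")
    ```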

  • A Two-Stage Hardware Trojan Detection Method Considering the Trojan Probability of Neighbor Nets

    Kento HASEGAWA  Tomotaka INOUE  Nozomu TOGAWA  

     
    PAPER

    Publicized: 2021/05/12  Vol: E104-A No:11  Page(s): 1516-1525

    Due to the rapid growth of the information industry, various Internet of Things (IoT) devices have become widely used in our daily lives. Since the demand for low-cost, high-performance hardware devices has increased, malicious third-party vendors may insert malicious circuits into products to degrade their performance or to leak secret information stored in the devices. A malicious circuit surreptitiously inserted into a hardware product is known as a 'hardware Trojan.' How to detect hardware Trojans has become a significant concern in recent hardware production. In this paper, we propose a hardware Trojan detection method that employs two-stage neural networks and effectively utilizes the Trojan probability of neighbor nets. In the first stage, 11 Trojan features are extracted from the nets in a given netlist, and we estimate each net's Trojan probability, i.e., the probability that it is a Trojan net. In the second stage, we learn from the Trojan probabilities of the neighbor nets of each net and classify the nets into normal nets and Trojan ones. The experimental results demonstrate an average true positive rate of 83.6% and an average true negative rate of 96.5%, which is sufficiently high compared with existing methods.
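
    A two-stage sketch of the idea above, with random forests standing in for the paper's neural networks: stage 1 scores each net from its own features, and stage 2 re-classifies each net using the stage-1 scores of its graph neighbors. All data are synthetic:

    ```python
    # Stage-1 per-net probability, then stage-2 over neighbor probabilities.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)
    X = rng.random((300, 11))                     # 11 features per net
    y = (X[:, 0] + X[:, 1] > 1.2).astype(int)     # toy Trojan labels
    neighbors = [rng.choice(300, size=4, replace=False) for _ in range(300)]

    stage1 = RandomForestClassifier(random_state=0).fit(X, y)
    p = stage1.predict_proba(X)[:, 1]             # per-net Trojan probability

    # stage-2 features: own probability plus neighbor-probability statistics
    X2 = np.array([[p[i], p[nb].mean(), p[nb].max()]
                   for i, nb in enumerate(neighbors)])
    stage2 = RandomForestClassifier(random_state=0).fit(X2, y)
    print(stage2.score(X2, y))                    # training-set sanity check
    ```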

  • Evaluation Metrics for the Cost of Data Movement in Deep Neural Network Acceleration

    Hongjie XU  Jun SHIOMI  Hidetoshi ONODERA  

     
    PAPER

    Publicized: 2021/06/01  Vol: E104-A No:11  Page(s): 1488-1498

    Hardware accelerators are designed to support specialized processing dataflows for ever-changing deep neural networks (DNNs) under various processing environments. This paper introduces two hardware properties that describe the cost of data movement in each level of the memory hierarchy. Based on these properties, the paper proposes a set of evaluation metrics that quantify the number of memory accesses and the required memory capacity for a given processing dataflow. The proposed metrics can analytically predict the energy, throughput, and area of a hardware design without detailed implementation. Once a processing dataflow and the constraints of hardware resources are determined, the proposed evaluation metrics quickly quantify the expected hardware benefits, thereby reducing design time.

  • An Effective Feature Extraction Mechanism for Intrusion Detection System

    Cheng-Chung KUO  Ding-Kai TSENG  Chun-Wei TSAI  Chu-Sing YANG  

     
    PAPER

    Publicized: 2021/07/27  Vol: E104-D No:11  Page(s): 1814-1827

    The development of an efficient detection mechanism for malicious network traffic has been a critical research topic in the field of network security in recent years. This study implemented an intrusion detection system (IDS) based on a machine learning algorithm that periodically converts and analyzes real network traffic in a campus environment in near real time. The focus of this study is on how to improve the detection rate of an IDS and how to detect more attacks on non-well-known ports than the traditional rule-based system. Four new features are used to increase the discriminant accuracy. In addition, a dataset-balancing algorithm was used to construct the training dataset, which also enables the learning model to more accurately reflect situations in real environments.
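
    A sketch of the balancing step mentioned above, assuming simple random undersampling of the majority class so the model is not dominated by benign flows; the paper's actual balancing algorithm may differ:

    ```python
    # Undersample each class to the size of the rarest class.
    import numpy as np

    def undersample(X, y, rng=np.random.default_rng(4)):
        classes, counts = np.unique(y, return_counts=True)
        n = counts.min()                          # size of the rarest class
        keep = np.concatenate([
            rng.choice(np.flatnonzero(y == c), size=n, replace=False)
            for c in classes
        ])
        return X[keep], y[keep]

    X = np.random.default_rng(4).random((1000, 8))
    y = np.array([0] * 950 + [1] * 50)            # highly imbalanced labels
    Xb, yb = undersample(X, y)
    print(np.bincount(yb))                        # [50 50]
    ```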

  • Improving the Recognition Accuracy of a Sound Communication System Designed with a Neural Network

    Kosei OZEKI  Naofumi AOKI  Saki ANAZAWA  Yoshinori DOBASHI  Kenichi IKEDA  Hiroshi YASUDA  

     
    PAPER-Engineering Acoustics

    Publicized: 2021/05/06  Vol: E104-A No:11  Page(s): 1577-1584

    This study developed a system that performs data communication using high-frequency bands of sound signals. Unlike radio communication systems using advanced wireless devices, it requires only legacy devices such as the microphones and speakers employed in ordinary telephony systems. In this study, we investigated the potential of a machine learning approach to improve the recognition accuracy in identifying binary symbols exchanged through sound media. This paper describes experimental results evaluating the performance of our proposed technique, which employs a neural network as its classifier of binary symbols. The experimental results indicate that the proposed technique is promising for designing an optimal classifier for the symbol identification task.
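
    A hedged sketch of this kind of classifier: feed FFT magnitudes of a high-frequency audio frame to a small neural network that decides which of two tones encodes the bit. The tone frequencies, frame size, and network are illustrative choices, not the paper's system:

    ```python
    # Classify binary symbols from synthetic high-frequency audio frames.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    FS, N, F0, F1 = 44100, 512, 17000, 19000      # sample rate, frame, tones

    def frame(bit, rng):
        t = np.arange(N) / FS
        tone = np.sin(2 * np.pi * (F1 if bit else F0) * t)
        return np.abs(np.fft.rfft(tone + 0.5 * rng.normal(size=N)))

    rng = np.random.default_rng(5)
    bits = rng.integers(0, 2, size=400)
    X = np.array([frame(b, rng) for b in bits])

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, bits)
    print(clf.score(X, bits))
    ```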

  • A Survey on Spectrum Sensing and Learning Technologies for 6G Open Access

    Zihang SONG  Yue GAO  Rahim TAFAZOLLI  

     
    INVITED PAPER

    Publicized: 2021/04/26  Vol: E104-B No:10  Page(s): 1207-1216

    Cognitive radio provides a feasible solution for alleviating the lack of spectrum resources by enabling secondary users to access unused spectrum dynamically. Spectrum sensing and learning, as fundamental functions for dynamic spectrum sharing in 5G evolution and 6G wireless systems, have been research hotspots worldwide. This paper reviews classic narrowband and wideband spectrum sensing and learning algorithms. Sub-sampling frameworks and recovery algorithms based on compressed sensing theory, together with their hardware implementations, are discussed in light of the high channel bandwidth and large capacity to be deployed in 5G evolution and 6G communication systems. The paper also surveys and summarizes recent progress in machine learning for spectrum sensing technology.

  • Explanatory Rule Generation for Advanced Driver Assistant Systems

    Juha HOVI  Ryutaro ICHISE  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2021/06/11  Vol: E104-D No:9  Page(s): 1427-1439

    Autonomous vehicles and advanced driver assistant systems (ADAS) are receiving notable attention as research fields in both academia and private industry. Some decision-making systems use sets of logical rules to map knowledge of the ego-vehicle and its environment into actions the ego-vehicle should take. However, such rulesets can be difficult to create manually because of the complexity of traffic as an operating environment. Furthermore, the building blocks of the rules must be defined; one common solution is an ontology specifically aimed at describing traffic concepts and their hierarchy. Such ontologies must have a certain expressive power to enable the construction of useful rules. We propose a process for generating sets of explanatory rules for ADAS applications from data, using an ontology as the base vocabulary, and present a ruleset generated in our experiments that is correct within the scope of the experiment.

  • Conditional Wasserstein Generative Adversarial Networks for Rebalancing Iris Image Datasets

    Yung-Hui LI  Muhammad Saqlain ASLAM  Latifa Nabila HARFIYA  Ching-Chun CHANG  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2021/06/01  Vol: E104-D No:9  Page(s): 1450-1458

    The recent development of deep-learning-based generative models has sharply intensified interest in data synthesis and its applications. Data synthesis is especially important for pattern recognition tasks in which some classes of data are rare and difficult to collect. In an iris dataset, for instance, the minority-class samples include images of eyes with glasses, oversized or undersized pupils, misaligned iris locations, and irises occluded or contaminated by eyelids, eyelashes, or lighting reflections. Such class-imbalanced datasets often result in biased classification performance. Generative adversarial networks (GANs) are one of the most promising frameworks; they learn to generate synthetic data through a two-player minimax game between a generator and a discriminator. In this paper, we utilize the state-of-the-art conditional Wasserstein generative adversarial network with gradient penalty (CWGAN-GP) to generate minority-class iris images, which saves a huge amount of human labor for rare-data collection. With our model, researchers can generate as many iris images of rare cases as they need, which helps in developing deep learning algorithms whenever a large dataset is required.
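
    The gradient-penalty term at the heart of WGAN-GP training, sketched in PyTorch: the critic's gradient norm on interpolated samples is pushed toward 1. The critic network and sample dimensions here are stand-ins, not the paper's model:

    ```python
    # WGAN-GP gradient penalty on real/fake interpolations.
    import torch
    import torch.nn as nn

    critic = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def gradient_penalty(real, fake, lam=10.0):
        eps = torch.rand(real.size(0), 1)                  # per-sample mix
        x = (eps * real + (1 - eps) * fake).requires_grad_(True)
        grad, = torch.autograd.grad(critic(x).sum(), x, create_graph=True)
        return lam * ((grad.norm(2, dim=1) - 1) ** 2).mean()

    real, fake = torch.randn(8, 64), torch.randn(8, 64)
    print(gradient_penalty(real, fake))   # added to the critic's loss
    ```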

  • Performance Evaluation of Online Machine Learning Models Based on Cyclic Dynamic and Feature-Adaptive Time Series

    Ahmed Salih AL-KHALEEFA  Rosilah HASSAN  Mohd Riduan AHMAD  Faizan QAMAR  Zheng WEN  Azana Hafizah MOHD AMAN  Keping YU  

     
    PAPER

    Publicized: 2021/05/14  Vol: E104-D No:8  Page(s): 1172-1184

    Machine learning is becoming an attractive topic for researchers and industrial firms in the area of computational intelligence because of its proven effectiveness and performance in resolving real-world problems. However, challenges such as precise search, intelligent discovery, and intelligent learning still need to be addressed. One of the most important challenges is the non-steady performance of various machine learning models during online learning and operation. Online learning is the ability of a machine learning model to update itself when new information is available, without retraining from scratch. To address this challenge, we evaluate and analyze four widely used online machine learning models: Online Sequential Extreme Learning Machine (OSELM), Feature Adaptive OSELM (FA-OSELM), Knowledge Preserving OSELM (KP-OSELM), and Infinite Term Memory OSELM (ITM-OSELM). Specifically, we provide a testbed for the models by building a framework and configuring various evaluation scenarios for different factors in the topological and mathematical aspects of the models. Furthermore, we generate time series with different characteristics to be learned. The results demonstrate the real impact of the tested parameters and scenarios on the models. In terms of accuracy, KP-OSELM and ITM-OSELM are superior to OSELM and FA-OSELM. With regard to time efficiency as the percentage of active features decreases, ITM-OSELM is superior to KP-OSELM.
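
    A minimal numpy sketch of the OS-ELM recursion underlying all four models, assuming the standard recursive-least-squares update: hidden-layer weights stay fixed, and the output weights are updated chunk by chunk without revisiting old data. Sizes and data are synthetic:

    ```python
    # OS-ELM: initial batch solution plus one online update step.
    import numpy as np

    rng = np.random.default_rng(6)
    L, d = 20, 5                                   # hidden nodes, input dim
    W, b = rng.normal(size=(d, L)), rng.normal(size=L)
    hidden = lambda X: np.tanh(X @ W + b)          # fixed random feature map

    X0, T0 = rng.normal(size=(50, d)), rng.normal(size=(50, 1))
    H0 = hidden(X0)
    P = np.linalg.inv(H0.T @ H0)                   # initial inverse Gram
    beta = P @ H0.T @ T0                           # initial output weights

    def oselm_step(P, beta, X, T):                 # one online chunk
        H = hidden(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ P @ H.T)
        P = P - P @ H.T @ K @ H @ P
        beta = beta + P @ H.T @ (T - H @ beta)
        return P, beta

    P, beta = oselm_step(P, beta, rng.normal(size=(10, d)),
                         rng.normal(size=(10, 1)))
    print(beta.shape)                              # (20, 1)
    ```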

  • Improved Hybrid Feature Selection Framework

    Weizhi LIAO  Guanglei YE  Weijun YAN  Yaheng MA  Dongzhou ZUO  

     
    PAPER

    Publicized: 2021/05/12  Vol: E104-D No:8  Page(s): 1266-1273

    An efficient feature selection strategy is important for dimensionality reduction. Extensive existing research efforts can be summarized into three classes: filter methods, wrapper methods, and embedded methods. In this work, we propose an integrated two-stage feature extraction method, referred to as FWS, which combines the filter and wrapper methods to efficiently extract important features in a hybrid mode. FWS conducts a first level of selection that filters out unrelated features using correlation analysis, and a second level that finds a near-optimal subset capturing valuable discriminative features by evaluating the performance of a predictive model trained on each candidate subset. Compared with technologies such as mRMR and Relief-F, FWS significantly improves detection performance through this integrated optimization strategy. Results show the performance superiority of the proposed solution over several well-known feature selection methods.
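
    A sketch of the generic filter-plus-wrapper pattern FWS belongs to: a correlation filter removes features unrelated to the label, then a greedy wrapper keeps a feature only if it improves cross-validated accuracy. The threshold, base model, and selection order are illustrative assumptions, not the paper's exact algorithm:

    ```python
    # Two-stage feature selection: correlation filter, then greedy wrapper.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    X = rng.normal(size=(300, 20))
    y = (X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=300) > 0).astype(int)

    # stage 1 (filter): keep features correlated with the label
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    candidates = [j for j in np.argsort(-corr) if corr[j] > 0.1]

    # stage 2 (wrapper): greedy forward selection on CV accuracy
    chosen, best = [], 0.0
    for j in candidates:
        score = cross_val_score(LogisticRegression(),
                                X[:, chosen + [j]], y).mean()
        if score > best:
            chosen, best = chosen + [j], score
    print(chosen, round(best, 3))
    ```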

Showing hits 41-60 of 172